IEEE Access; 10:78726-78738, 2022.
Article in English | Web of Science | ID: covidwho-1985442

ABSTRACT

When a learned model achieves high accuracy under familiar settings (internal testing) but suffers a large drop in accuracy under slightly different circumstances (external testing), we suspect it is using shortcuts to make decisions. This problem is known as shortcut learning. In medical imaging, shortcuts are undesired and unintended features that the model relies on to perform diagnosis. Shortcut-based decisions on medical images could lead to false diagnoses with dangerous implications for patients. In the current COVID-19 era, a large number of papers have been published proposing deep convolutional neural networks to diagnose or triage COVID-19 from chest X-rays (CXRs). These studies report high accuracies that may be misleading and overestimated. To our knowledge, none of the currently published papers with high performance report testing on samples from truly unseen data sources. Studies that did test on unseen sources observed a significant performance drop, indicating a failure to generalize. In this paper, we elucidate the generalization challenge of deep learning based models trained for disease diagnosis, using COVID-19 diagnosis from CXRs as an example. We introduce solutions that mitigate shortcut learning and show experimentally that they are effective: our proposed methods yield a statistically significantly smaller performance drop on unseen data sources, reducing it from 20% to 9%. The issues with convolutional neural networks addressed here apply more generally to other imaging modalities and recognition problems, as we show.
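The internal-versus-external evaluation protocol the abstract describes can be sketched as follows. This is an illustrative sketch only, not the authors' code: it assumes PyTorch, and the names (`accuracy`, `generalization_drop`, the toy model and loaders) are hypothetical placeholders for a trained CNN, a held-out test set from the training source(s), and a test set from an unseen source.

```python
# Illustrative sketch (not the paper's code) of measuring the accuracy gap
# between internal (same-source) and external (unseen-source) test sets.
# Assumes PyTorch; model and loaders below are hypothetical stand-ins.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset


def accuracy(model: nn.Module, loader: DataLoader) -> float:
    """Fraction of correctly classified samples in `loader`."""
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for images, labels in loader:
            preds = model(images).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.numel()
    return correct / total


def generalization_drop(model: nn.Module,
                        internal_loader: DataLoader,
                        external_loader: DataLoader) -> float:
    """Accuracy on same-source data minus accuracy on unseen-source data."""
    return accuracy(model, internal_loader) - accuracy(model, external_loader)


if __name__ == "__main__":
    # Toy stand-ins: a tiny classifier and random "CXR" tensors, purely to
    # make the sketch runnable; real use would plug in a trained CNN and
    # loaders built from same-source and unseen-source test sets.
    model = nn.Sequential(nn.Flatten(), nn.Linear(1 * 64 * 64, 2))
    internal = DataLoader(TensorDataset(torch.randn(32, 1, 64, 64),
                                        torch.randint(0, 2, (32,))), batch_size=8)
    external = DataLoader(TensorDataset(torch.randn(32, 1, 64, 64),
                                        torch.randint(0, 2, (32,))), batch_size=8)
    print(f"Generalization drop: {generalization_drop(model, internal, external):.3f}")
```

A large positive gap between the two accuracies is the signal of shortcut learning discussed above: the model leans on source-specific cues rather than disease-related features.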
